
    Scientific Publications on Primary Biliary Cirrhosis from 2000 through 2010: An 11-Year Survey of the Literature

    BACKGROUND: Primary biliary cirrhosis (PBC) is a chronic liver disease characterized by intrahepatic bile-duct destruction, cholestasis, and fibrosis; it can lead to cirrhosis and eventually liver failure. PBC also shows regional differences in incidence and prevalence that are becoming more pronounced each year, and it has recently attracted increasing research attention. To evaluate the development of PBC research over the past 11 years, we determined the quantity and quality of articles on this subject and compared the contributions of scientists from the US, UK, Japan, Italy, Germany, and China. METHODS: English-language papers on PBC published in journals from 2000 through 2010 were retrieved from the PubMed database. We recorded the number of papers published each year, analyzed the publication types, and calculated the accumulated and average impact factors (IFs) and citation counts for each country. The quantity and quality of articles on PBC were compared by country, and the level of PBC research in China was contrasted with that in the other countries. RESULTS: The total number of articles did not increase significantly over the past 11 years. The number of articles from the US exceeded that from any other country, and US publications also had the highest IFs and the most citations. The four other countries showed mixed trends in the quantity and quality of articles about PBC. CONCLUSION: Researchers from the US have contributed the most to the development of PBC research and currently represent the highest level in the field. High-level studies, such as randomized controlled trials (RCTs), meta-analyses, and in-depth basic studies, should be launched. The gap between China and the most advanced countries is still enormous, and Chinese investigators still have a long way to go.

    Bibliometrics of systematic reviews: analysis of citation rates and journal impact factors

    Background: Systematic reviews are important for informing clinical practice and health policy. The aim of this study was to examine the bibliometrics of systematic reviews and to determine the amount of variance in citations predicted by the journal impact factor (JIF), alone and combined with several other characteristics. Methods: We conducted a bibliometric analysis of 1,261 systematic reviews published in 2008 and the citations to them in the Scopus database from 2008 to June 2012. Potential predictors of the citation impact of the reviews were examined using descriptive, univariate, and multiple regression analyses. Results: The mean number of citations per review over four years was 26.5 (SD +/-29.9), or 6.6 citations per review per year. The mean JIF of the journals in which the reviews were published was 4.3 (SD +/-4.2). We found that 17% of the reviews accounted for 50% of the total citations and that 1.6% of the reviews were not cited. The number of authors was correlated with the number of citations (r = 0.215). Some reviews published in the highest JIF quartile (>=5.16) received citations in the bottom quartile (eight or fewer), whereas 9% of reviews published in the lowest JIF quartile (<=2.06) received citations in the top quartile (34 or more). Six percent of reviews in journals with no JIF were also in the first quartile of citations. Conclusions: The JIF predicted over half of the variation in citations to the systematic reviews. However, the distribution of citations was markedly skewed. Some reviews in journals with low JIFs were well cited, while others in higher-JIF journals received relatively few citations; hence, the JIF did not accurately represent the number of citations to individual systematic reviews.
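The skew described in this abstract (17% of reviews accounting for half of all citations) can be illustrated with a short sketch. The citation counts below are invented for demonstration, not the study's data:

```python
def share_accounting_for_half(citations):
    """Fraction of items (most-cited first) needed to reach 50% of all citations."""
    ranked = sorted(citations, reverse=True)
    total = sum(ranked)
    running = 0
    for n_items, c in enumerate(ranked, start=1):
        running += c
        if running >= total / 2:
            return n_items / len(ranked)

# A right-skewed synthetic sample: a few highly cited reviews, many barely cited.
sample = [120, 90, 60, 10, 8, 5, 4, 3, 2, 1, 1, 0]
print(round(share_accounting_for_half(sample), 3))  # a small fraction dominates
```

With a heavier right tail the fraction shrinks further, which is why a journal-level average such as the JIF describes individual reviews so poorly.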

    Three options for citation tracking: Google Scholar, Scopus and Web of Science

    BACKGROUND: Researchers turn to citation tracking to find the most influential articles for a particular topic and to see how often their own published papers are cited. For years researchers looking for this type of information had only one resource to consult: the Web of Science from Thomson Scientific. In 2004 two competitors emerged – Scopus from Elsevier and Google Scholar from Google. The research reported here uses citation analysis in an observational study examining these three databases; comparing citation counts for articles from two disciplines (oncology and condensed matter physics) and two years (1993 and 2003) to test the hypothesis that the different scholarly publication coverage provided by the three search tools will lead to different citation counts from each. METHODS: Eleven journal titles with varying impact factors were selected from each discipline (oncology and condensed matter physics) using the Journal Citation Reports (JCR). All articles published in the selected titles were retrieved for the years 1993 and 2003, and a stratified random sample of articles was chosen, resulting in four sets of articles. During the week of November 7–12, 2005, the citation counts for each research article were extracted from the three sources. The actual citing references for a subset of the articles published in 2003 were also gathered from each of the three sources. RESULTS: For oncology 1993 Web of Science returned the highest average number of citations, 45.3. Scopus returned the highest average number of citations (8.9) for oncology 2003. Web of Science returned the highest number of citations for condensed matter physics 1993 and 2003 (22.5 and 3.9 respectively). The data showed a significant difference in the mean citation rates between all pairs of resources except between Google Scholar and Scopus for condensed matter physics 2003. 
For articles published in 2003, Google Scholar returned the largest amount of unique citing material for oncology, and Web of Science returned the most for condensed matter physics. CONCLUSION: This study did not identify any one of these three resources as the answer to all citation tracking needs. Scopus showed strength in providing citing literature for current (2003) oncology articles, while Web of Science produced more citing material for 2003 and 1993 condensed matter physics articles and for 1993 oncology articles. All three tools returned some unique material. Our data indicate that the question of which tool provides the most complete set of citing literature may depend on the subject and publication year of a given article.

    Impact Factor: outdated artefact or stepping-stone to journal certification?

    A review of Garfield's journal impact factor and its specific implementation as the Thomson Reuters Impact Factor reveals several weaknesses in this commonly used indicator of journal standing. Key limitations include the mismatch between citing and cited documents, the deceptive display of three decimals that belies the real precision, and the absence of confidence intervals. These are minor issues that are easily amended and should be corrected, but more substantive improvements are needed. There are indications that the scientific community seeks and needs better certification of journal procedures to improve the quality of published science. Comprehensive certification of editorial and review procedures could help ensure adequate procedures to detect duplicate and fraudulent submissions.
    Comment: 25 pages, 12 figures, 6 tables

    Sharing Detailed Research Data Is Associated with Increased Citation Rate

    BACKGROUND: Sharing research data provides benefit to the general scientific community, but the benefit is less obvious for the investigator who makes his or her data available. PRINCIPAL FINDINGS: We examined the citation history of 85 cancer microarray clinical trial publications with respect to the availability of their data. The 48% of trials with publicly available microarray data received 85% of the aggregate citations. In a linear regression, publicly available data were significantly (p = 0.006) associated with a 69% increase in citations, independent of journal impact factor, date of publication, and country of origin of the authors. SIGNIFICANCE: This correlation between publicly available data and increased literature impact may further motivate investigators to share their detailed research data.
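A hedged note on where a figure like "69% increase" can come from: one common route is a regression on log-transformed citation counts, where a coefficient b on a data-availability indicator implies a percent change of (e^b - 1) x 100. The coefficient below is illustrative, not taken from the paper:

```python
import math

def pct_change_from_log_coef(b):
    """Percent change in the outcome implied by coefficient b in a log-outcome model."""
    return (math.exp(b) - 1) * 100

# An illustrative coefficient of ~0.525 on log(citations) corresponds to
# roughly a 69% increase, the effect size reported in the abstract.
print(round(pct_change_from_log_coef(0.525), 1))
```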

    Videodensitometric analysis of advanced carotid plaque: correlation with MMP-9 and TIMP-1 expression

    Background: Matrix metalloproteinase-9 (MMP-9) and tissue inhibitor of MMP (TIMP) promote derangement of the extracellular matrix, which is ultimately reflected in plaque images seen on ultrasound. Videodensitometry can identify structural disturbances in plaques. Objectives: To establish the correlations between values determined using videodensitometry in B-mode ultrasound images of advanced carotid plaques and the total expression of MMP-9 and TIMP-1 in these removed plaques. Methods: Thirty patients underwent ultrasonic tissue characterization of carotid plaques before surgery, using mean gray level (MGL), energy, entropy, and homogeneity. Each patient was assigned preoperatively to one of two groups: group I, symptomatic patients (n = 16; 12 males; mean age 66.7 ± 6.8 years), and group II, asymptomatic patients (n = 14; 8 males; mean age 67.6 ± 6.81 years). Tissue specimens were analyzed for MMP-9 and TIMP-1 expression. Nine carotid arteries were used as normal tissue controls. Results: MMP-9 expression levels were elevated in group II and in normal tissues compared to group I (p < 0.001). TIMP-1 levels were higher in group II than in group I, and significantly higher in normal tissues than in group I (p = 0.039). The MGL was higher in group II compared to group I (p = 0.038). Energy had greater values in group II compared to group I (p = 0.02). There were no differences between patient groups in homogeneity and entropy. Energy positively correlated with MMP-9 and TIMP-1 expression (p = 0.012 and p = 0.031, respectively). Homogeneity positively correlated with MMP-9 and TIMP-1 expression (p = 0.034 and p = 0.047, respectively). There were no correlations between protein expression and MGL or entropy. Conclusions: Videodensitometric computer analysis of ultrasound scanning images can be used to identify stable carotid plaques, which have higher total expression levels of MMP-9 and TIMP-1 than unstable plaques.
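The texture measures named in this abstract (mean gray level, energy, entropy) are commonly computed as first-order statistics of the gray-level histogram. The sketch below follows that common formulation; the study's exact formulas are not given in the abstract, homogeneity (usually co-occurrence based) is omitted, and the pixel values are invented:

```python
import numpy as np

def texture_measures(gray_patch, levels=256):
    """First-order histogram texture statistics: MGL, energy, entropy.
    (A sketch of the standard definitions, not the study's implementation.)"""
    hist, _ = np.histogram(gray_patch, bins=levels, range=(0, levels))
    p = hist / hist.sum()          # gray-level probabilities
    nz = p[p > 0]                  # drop empty bins before taking logs
    return {
        "mgl": float(np.mean(gray_patch)),            # mean gray level
        "energy": float(np.sum(p ** 2)),              # histogram uniformity
        "entropy": float(-np.sum(nz * np.log2(nz))),  # histogram disorder
    }

# A tiny hypothetical image patch with two gray levels.
patch = np.array([[10, 10, 200], [10, 200, 200], [10, 10, 200]])
m = texture_measures(patch)
print(round(m["mgl"], 2), round(m["energy"], 3), round(m["entropy"], 3))
```

Higher energy (a more uniform, peaked histogram) and higher MGL are the features the study found elevated in asymptomatic (stable) plaques.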

    A Three-Stage Colonization Model for the Peopling of the Americas

    Background: We evaluate the process by which the Americas were originally colonized and propose a three-stage model that integrates current genetic, archaeological, geological, and paleoecological data. Specifically, we analyze mitochondrial and nuclear genetic data by using complementary coalescent models of demographic history and incorporating nongenetic data to enhance the anthropological relevance of the analysis. Methodology/Findings: Bayesian skyline plots, which provide dynamic representations of population size changes over time, indicate that Amerinds went through two stages of growth ∼40,000 and ∼15,000 years ago separated by a long period of population stability. Isolation-with-migration coalescent analyses, which utilize data from sister populations to estimate a divergence date and founder population sizes, suggest an Amerind population expansion starting ∼15,000 years ago. Conclusions/Significance: These results support a model for the peopling of the New World in which Amerind ancestors diverged from the Asian gene pool prior to 40,000 years ago and experienced a gradual population expansion as they moved into Beringia. After a long period of little change in population size in greater Beringia, Amerinds rapidly expanded into the Americas ∼15,000 years ago either through an interior ice-free corridor or along the coast. This rapid colonization of the New World was achieved by a founder group with an effective population size of ∼1,000–5,400 individuals. Our model presents a detailed scenario for the timing and scale of the initial migration to the Americas, substantially refines th…

    Inferring the joint demographic history of multiple populations from multidimensional SNP frequency data

    Demographic models built from genetic data play important roles in illuminating prehistorical events and serving as null models in genome scans for selection. We introduce an inference method based on the joint frequency spectrum of genetic variants within and between populations. For candidate models, we numerically compute the expected spectrum using a diffusion approximation to the one-locus, two-allele Wright-Fisher process, involving up to three simultaneous populations. Our approach is a composite likelihood scheme, since linkage between neutral loci alters the variance but not the expectation of the frequency spectrum. We thus use bootstraps incorporating linkage to estimate uncertainties for parameters and significance values for hypothesis tests. Our method can also incorporate selection on single sites, predicting the joint distribution of selected alleles among populations experiencing a bevy of evolutionary forces, including expansions, contractions, migrations, and admixture. As applications, we model human expansion out of Africa and the settlement of the New World, using 5 Mb of noncoding DNA resequenced in 68 individuals from 4 populations (YRI, CHB, CEU, and MXL) by the Environmental Genome Project. We also combine our demographic model with a previously estimated distribution of selective effects among newly arising amino acid mutations to accurately predict the frequency spectrum of nonsynonymous variants across three continental populations (YRI, CHB, CEU).
    Comment: 17 pages, 4 figures, supporting information included with source
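As a toy illustration of the summary statistic this method is built on, the joint site frequency spectrum (SFS) of two populations can be tabulated directly from per-SNP derived-allele counts. This is not the authors' diffusion-based machinery, only the observed spectrum it models; the SNP counts below are hypothetical:

```python
import numpy as np

def joint_sfs(counts_pop1, counts_pop2, n1, n2):
    """Entry (i, j) counts SNPs whose derived allele appears i times among
    n1 sampled chromosomes in population 1 and j times among n2 in population 2."""
    sfs = np.zeros((n1 + 1, n2 + 1), dtype=int)
    for i, j in zip(counts_pop1, counts_pop2):
        sfs[i, j] += 1
    return sfs

# Five hypothetical SNPs typed on 4 chromosomes per population.
spectrum = joint_sfs([1, 2, 0, 4, 1], [0, 2, 3, 4, 1], n1=4, n2=4)
print(spectrum.sum())  # one entry per SNP
```

The inference step then compares an observed spectrum like this to the expected spectrum computed under a candidate demographic model.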

    Aging and Visual Counting

    Much previous work on how normal aging affects visual enumeration has focused on the response time required to enumerate with unlimited stimulus duration. A fundamental question, not yet addressed, is how many visual items the aging visual system can enumerate in a "single glance", without the confounding influence of eye movements. We recruited 104 observers with normal vision across the age span (ages 21-85). They were briefly (200 ms) presented with a number of well-separated black dots against a gray background on a monitor screen and were asked to judge the number of dots. By limiting the stimulus presentation time, we can determine the maximum number of visual items an observer can correctly enumerate at a criterion level of performance (the counting threshold, defined as the number of visual items at which the observer reaches a ≈63% correct rate on a psychometric curve), without confounding by eye movements. Our findings reveal a 30% decrease in the mean counting threshold of the oldest group (ages 61-85: ∼5 dots) compared with the youngest group (ages 21-40: 7 dots). Surprisingly, despite the decreased counting threshold, the counting accuracy function (defined as the mean number of dots reported for each number tested) is, on average, largely unaffected by age, indicating that the threshold loss can be primarily attributed to increased random errors. We further expanded this finding to show that both young and old adults tend to over-count small numbers, but older observers over-count more. Here we show that age reduces the ability to correctly enumerate in a glance, but that accuracy (veridicality), on average, remains unchanged with advancing age. Control experiments indicate that the degraded performance cannot be explained by optical, retinal, or other perceptual factors, but is cortical in origin.
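The threshold definition in this abstract (the set size at which accuracy falls to ≈63% correct on a psychometric curve) can be sketched as follows. Real studies fit a parametric psychometric function; plain linear interpolation is used here for clarity, and the accuracy data are hypothetical:

```python
def counting_threshold(set_sizes, pct_correct, criterion=0.63):
    """Interpolate the set size where accuracy crosses the criterion level."""
    points = list(zip(set_sizes, pct_correct))
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if y0 >= criterion >= y1:  # criterion crossed between x0 and x1
            return x0 + (y0 - criterion) * (x1 - x0) / (y0 - y1)
    raise ValueError("criterion not crossed by the data")

# Hypothetical accuracy for a younger observer: near-ceiling up to ~6 dots.
sizes = [2, 4, 6, 8, 10]
accuracy = [1.00, 0.98, 0.85, 0.41, 0.15]
print(round(counting_threshold(sizes, accuracy), 2))  # threshold near 7 dots
```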